
    Unsupervised Algorithms for Microarray Sample Stratification

    The amount of data made available by microarrays gives researchers the opportunity to delve into the complexity of biological systems. However, the noisy and extremely high-dimensional nature of this kind of data poses significant challenges. Microarrays allow for the parallel measurement of thousands of molecular objects spanning different layers of interactions. To discover hidden patterns in such data, a wide variety of analytical techniques have been proposed. Here, we describe the basic methodologies for analysing microarray datasets, focusing on the task of (sub)group discovery. Peer reviewed.
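    To make the stratification task concrete, here is a minimal sketch (not the chapter's own protocol) that clusters samples from an expression matrix after dimensionality reduction; the data, component count, and candidate cluster numbers are illustrative placeholders.

```python
# Minimal sketch: stratifying microarray samples by expression profile.
# Assumes a preprocessed, log-transformed matrix of shape (samples x genes);
# the random matrix below is a placeholder for real data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
expr = rng.normal(size=(60, 2000))        # placeholder expression matrix

# Reduce the extreme dimensionality before clustering.
pcs = PCA(n_components=10).fit_transform(expr)

# Try a few cluster counts and keep the best-separated partition.
best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pcs)
    score = silhouette_score(pcs, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"best k = {best_k}, silhouette = {best_score:.2f}")
```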

    Supervised Methods for Biomarker Detection from Microarray Experiments

    Biomarkers are valuable indicators of the state of a biological system. Microarray technology has been used extensively to identify biomarkers and to build computational predictive models for disease prognosis and for drug sensitivity and toxicity evaluation. Activation biomarkers can be used to understand the underlying signaling cascades, mechanisms of action, and biological cross-talk. Biomarker detection from microarray data requires several considerations from both the biological and computational points of view. In this chapter, we describe the main methodologies used in biomarker discovery and predictive modeling, and we address some of the related challenges. Moreover, we discuss biomarker validation and give some insights into multi-omics strategies for biomarker detection. Non peer reviewed.
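    As an illustration of the supervised setting, the following sketch (a generic example, not the chapter's specific methodology) selects candidate biomarker genes with a univariate filter inside a cross-validated classifier; the matrix, labels, and parameter choices are placeholders.

```python
# Minimal sketch of biomarker detection as supervised feature selection.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5000))           # placeholder expression matrix
y = rng.integers(0, 2, size=80)           # placeholder phenotype labels

# Univariate filter + sparse classifier, fitted inside cross-validation
# so the selection step does not leak information across folds.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear")),
])
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")

# Genes surviving selection on the full data are candidate biomarkers,
# to be confirmed by independent validation.
model.fit(X, y)
candidates = np.where(model.named_steps["select"].get_support())[0]
```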

    TinderMIX: Time-dose integrated modelling of toxicogenomics data

    Background: Omics technologies have been widely applied in toxicology to investigate the effects of different substances on exposed biological systems. A classical toxicogenomic study tests the effects of a compound at multiple dose levels and time points, and the main challenge is to identify the gene alteration patterns that correlate with dose and time. Most existing methods for toxicogenomics data analysis study the molecular alterations after exposure (or treatment) at each time point individually; this kind of analysis cannot identify dynamic (time-dependent) events of dose responsiveness. Results: We propose TinderMIX, an approach that simultaneously models the effects of time and dose on the transcriptome to investigate the course of molecular alterations exerted in response to exposure. Starting from gene log fold-changes, TinderMIX fits different integrated time-dose models to each gene, selects the optimal one, and computes its time-dose effect map; a user-selected threshold is then applied to identify the responsive area on each map and to verify whether the gene shows a dynamic (time-dependent) and dose-dependent response; finally, responsive genes are labelled according to their integrated time and dose point of departure. Conclusions: To showcase the TinderMIX method, we analysed two drugs from the Open TG-GATEs dataset, cyclosporin A and thioacetamide. We first identified the dynamic dose-dependent mechanism of action of each drug and then compared them. Our analysis highlights that the time- and dose-integrated points of departure recapitulate both the toxicity potential of the compounds and their dynamic dose-dependent mechanisms of action. Peer reviewed.
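    The sketch below is not the TinderMIX implementation, but it illustrates the core idea under simplified assumptions: fit a polynomial time-dose surface to one gene's log fold-changes, evaluate it on a dense grid to obtain the effect map, and threshold the map to find the responsive area. All numbers are toy values.

```python
# Toy sketch of integrated time-dose modelling for a single gene.
import numpy as np

time = np.array([2.0, 8.0, 24.0] * 3)             # hours (illustrative)
dose = np.repeat([0.1, 1.0, 10.0], 3)             # mg/kg (illustrative)
logfc = np.random.default_rng(0).normal(size=9)   # placeholder gene logFC

# Design matrix for a quadratic time-dose model; least-squares fit.
t, d = time, np.log10(dose)
A = np.column_stack([np.ones_like(t), t, d, t * d, t**2, d**2])
coef, *_ = np.linalg.lstsq(A, logfc, rcond=None)

# Dense effect map; cells above the user threshold form the responsive area.
tt, dd = np.meshgrid(np.linspace(2, 24, 50), np.linspace(-1, 1, 50))
fit = (coef[0] + coef[1] * tt + coef[2] * dd + coef[3] * tt * dd
       + coef[4] * tt**2 + coef[5] * dd**2)
responsive = np.abs(fit) > 0.58                   # e.g. |logFC| > log2(1.5)
print(f"responsive fraction of the map: {responsive.mean():.2f}")
```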

    Integrated Network Pharmacology Approach for Drug Combination Discovery: A Multi-Cancer Case Study

    Simple Summary: Current treatments for complex diseases, including cancer, are generally characterized by high toxicity due to their low selectivity for target cells. Moreover, patients often develop drug resistance and become less sensitive to therapy. Novel, more specific pharmacological therapies are therefore needed. The high cost and long timelines of new drug development have focused attention on computational methods for drug repositioning and combination therapy prediction. In this study, we developed an integrated network pharmacology framework that combines mechanistic and chemocentric approaches to predict potential drug combinations for cancer therapy. We applied it to five cancer types as case studies; the strategy can guide the prioritization of drug combinations for any complex disease. Despite remarkable efforts of computational and predictive pharmacology to improve therapeutic strategies for complex diseases, only in a few cases have the predictions eventually been employed in the clinic. One reason is that current predictive approaches integrate only the molecular perturbation of a disease with drug sensitivity signatures, neglecting intrinsic properties of the drugs. Here we integrate mechanistic and chemocentric approaches to drug repositioning in an innovative network pharmacology strategy: a multilayer network-based computational framework that combines perturbational signatures of the disease with intrinsic characteristics of the drugs, such as their mechanism of action and chemical structure. We present five case studies on public data from The Cancer Genome Atlas: invasive breast cancer, colon adenocarcinoma, lung squamous cell carcinoma, hepatocellular carcinoma, and prostate adenocarcinoma. Our results highlight paclitaxel as a suitable drug for combination therapy in many of the considered cancer types. In addition, several non-cancer-related genes representing unusual drug targets were identified as potential candidates for pharmacological treatment of cancer. Peer reviewed.
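    To illustrate the flavour of such a framework, the toy sketch below collapses the multilayer idea into two layers, drug targets and disease genes, and scores drug pairs that jointly cover disease genes with little target overlap; the drugs, targets, and scoring rule are all hypothetical, not those of the published method.

```python
# Toy two-layer sketch of combination prioritization.
import itertools

targets = {                       # hypothetical drug -> target sets
    "drugA": {"TP53", "TUBB"},
    "drugB": {"EGFR", "KRAS"},
    "drugC": {"TUBB", "EGFR"},
}
disease_genes = {"TP53", "EGFR", "KRAS", "MYC"}

def pair_score(a, b):
    """Reward joint coverage of disease genes, penalize redundant targets."""
    cover = (targets[a] | targets[b]) & disease_genes
    overlap = targets[a] & targets[b]
    return len(cover) - len(overlap)

ranked = sorted(itertools.combinations(targets, 2),
                key=lambda p: pair_score(*p), reverse=True)
print(ranked[0])                  # top candidate combination
```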

    Toxicogenomics Data for Chemical Safety Assessment and Development of New Approach Methodologies: An Adverse Outcome Pathway-Based Approach

    Mechanistic toxicology provides a powerful approach to inform on the safety of chemicals and the development of safe-by-design compounds. Although toxicogenomics supports the mechanistic evaluation of chemical exposures, its implementation in the regulatory framework is hindered by uncertainties in the analysis and interpretation of such data. The use of mechanistic evidence through the adverse outcome pathway (AOP) concept is promoted for the development of new approach methodologies (NAMs) that can reduce animal experimentation. However, to unleash the full potential of AOPs and build confidence in toxicogenomics, robust associations between AOPs and patterns of molecular alteration need to be established. Systematic curation of molecular events onto AOPs will create the much-needed link between toxicogenomics and the systemic mechanisms depicted by the AOPs. This, in turn, will introduce novel ways of benefitting from AOPs, including predictive models and targeted assays, while reducing the need for multiple testing strategies. Hence, we developed a multi-step strategy to annotate AOPs and applied the resulting associations to successfully highlight relevant adverse outcomes for chemical exposures with strong in vitro and in vivo convergence, supporting chemical grouping and other data-driven approaches. Finally, a panel of AOP-derived in vitro biomarkers for pulmonary fibrosis (PF) was identified and experimentally validated. Peer reviewed.
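    One simple way to picture the association step is an enrichment test between exposure-induced genes and the genes curated to an AOP key event; the sketch below (toy gene lists, not the paper's curation strategy) uses a hypergeometric test for this purpose.

```python
# Minimal sketch: is an exposure's DEG list enriched in a key-event gene set?
from scipy.stats import hypergeom

universe = 20000                                    # assayed genes (assumed)
degs = {"COL1A1", "TGFB1", "ACTA2", "FN1", "MMP2"}  # exposure DEGs (toy)
key_event = {"COL1A1", "TGFB1", "FN1", "SMAD3"}     # key-event genes (toy)

overlap = len(degs & key_event)
# P(overlap >= observed) under random draws from the gene universe.
p = hypergeom.sf(overlap - 1, universe, len(key_event), len(degs))
print(f"overlap = {overlap}, enrichment p = {p:.2e}")
```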

    Advances in De Novo Drug Design: From Conventional to Machine Learning Methods

    De novo drug design is a computational approach that generates novel molecular structures from atomic building blocks with no a priori relationships. Conventional methods include structure-based and ligand-based design, which depend on the properties of the active site of a biological target or on its known active binders, respectively. Artificial intelligence, including machine learning, is an emerging field that has positively impacted the drug discovery process. Deep reinforcement learning is a subdivision of machine learning that combines artificial neural networks with reinforcement-learning architectures. This method has successfully been employed to develop novel de novo drug design approaches using a variety of artificial neural networks, including recurrent neural networks, convolutional neural networks, generative adversarial networks, and autoencoders. This review summarizes advances in de novo drug design, from conventional growth algorithms to advanced machine-learning methodologies, and highlights hot topics for further development. Peer reviewed.
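    As a schematic of the deep reinforcement learning idea (not any specific published generator), the sketch below trains a small GRU policy that emits tokens one at a time and is updated with REINFORCE toward a toy reward; a real system would use a proper SMILES vocabulary and chemistry-aware scoring.

```python
# Schematic REINFORCE loop over a token-emitting recurrent policy.
import torch
import torch.nn as nn

vocab = ["C", "N", "O", "(", ")", "=", "$"]   # "$" marks end of sequence
V = len(vocab)

class Policy(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(V, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, V)

    def forward(self, tok, h):
        x, h = self.gru(self.emb(tok), h)
        return self.out(x), h

def reward(tokens):
    # Toy reward: favour carbon-rich sequences; a real objective would
    # score validity, drug-likeness, or target affinity.
    return float(sum(t == 0 for t in tokens))

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    tok = torch.zeros(1, 1, dtype=torch.long)    # start from token "C"
    h, logps, toks = None, [], []
    for _ in range(20):                          # sample up to 20 tokens
        logits, h = policy(tok, h)
        dist = torch.distributions.Categorical(logits=logits[:, -1])
        tok = dist.sample().unsqueeze(0)         # shape (1, 1) for next step
        logps.append(dist.log_prob(tok.squeeze(0)))
        toks.append(tok.item())
        if toks[-1] == V - 1:                    # "$" sampled: stop
            break
    # REINFORCE: push up the log-probability of high-reward sequences.
    loss = -torch.stack(logps).sum() * reward(toks)
    opt.zero_grad(); loss.backward(); opt.step()
```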

    Nextcast: A software suite to analyse and model toxicogenomics data

    Recent advancements in toxicogenomics have led to the availability of large omics data sets, the starting point for studying the mechanism of action of exposures and identifying candidate biomarkers for toxicity prediction. The current lack of standard methods in data generation and analysis hampers the full exploitation of toxicogenomics-based evidence in regulatory risk assessment. Moreover, the pipelines for the preprocessing and downstream analyses of toxicogenomic data sets can be quite challenging to implement. Over the years, we have developed a number of software packages that address specific questions arising at multiple steps of toxicogenomics data analysis and modelling. In this review we present the Nextcast software collection and discuss how its individual tools can be combined into efficient pipelines to answer specific biological questions. The Nextcast components support the scientific community in analysing and interpreting large data sets for the toxicity evaluation of compounds in an unbiased, straightforward, and reliable manner. The Nextcast software suite is available at https://github.com/fhaive/nextcast. Peer reviewed.

    Computationally prioritized drugs inhibit SARS-CoV-2 infection and syncytia formation

    The pharmacological arsenal against the COVID-19 pandemic is largely based on generic anti-inflammatory strategies or poorly scalable solutions. Moreover, as the ongoing vaccination campaign is rolling out more slowly than hoped, affordable and effective therapeutics are needed. To this end, there is increasing attention toward computational methods for drug repositioning and de novo drug design. Here, multiple data-driven computational approaches are systematically integrated to perform a virtual screening and prioritize candidate drugs for the treatment of COVID-19. From the list of prioritized drugs, a subset of representative candidates is selected for testing in human cells. Two compounds, 7-hydroxystaurosporine and bafetinib, show synergistic antiviral effects in vitro and strongly inhibit virus-induced syncytia formation. Moreover, since existing drug repositioning methods provide limited usable information for de novo drug design, the relevant chemical substructures of the identified drugs are extracted to provide a chemical vocabulary that may help design new effective drugs. Peer reviewed.
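    The substructure-extraction step can be pictured with RDKit's BRICS decomposition, as in the minimal sketch below; the input SMILES are stand-ins, not the drugs prioritized in the study.

```python
# Minimal sketch: decompose molecules into recurring substructures that
# could seed a chemical vocabulary for de novo design. Requires RDKit.
from collections import Counter
from rdkit import Chem
from rdkit.Chem import BRICS

smiles = ["CC(=O)Oc1ccccc1C(=O)O",          # aspirin, as a stand-in
          "CN1CCC[C@H]1c1cccnc1"]           # nicotine, as a stand-in

fragments = Counter()
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    fragments.update(BRICS.BRICSDecompose(mol))

# Substructures recurring across prioritized drugs form the vocabulary.
for frag, n in fragments.most_common(5):
    print(n, frag)
```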

    Transcriptomics in Toxicogenomics, Part III: Data Modelling for Risk Assessment

    Transcriptomics data are relevant to address a number of challenges in toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies enables the development and application of artificial intelligence (AI) methods in TGx. Indeed, publicly available omics datasets are constantly increasing, together with a plethora of methods for their analysis and interpretation and for the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be employed to clarify the mechanism of action (MOA) or identify specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we give a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
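    As an example of the BMD idea applied to a single gene, the sketch below fits a Hill-like dose-response curve to toy log fold-changes and solves for the dose at which the fitted curve reaches a chosen benchmark response; the data and threshold are illustrative, not a prescribed protocol.

```python
# Minimal sketch of benchmark dose (BMD) estimation for one gene.
import numpy as np
from scipy.optimize import curve_fit, brentq

dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([0.0, 0.1, 0.4, 1.1, 1.8, 2.0])     # logFC (toy values)

def hill(d, top, ec50, n):
    return top * d**n / (ec50**n + d**n)

(top, ec50, n), _ = curve_fit(hill, dose, resp, p0=[2.0, 1.0, 1.0])

# BMD = dose at which the fitted curve reaches the benchmark response.
bmr = 0.58                                          # e.g. |logFC| = log2(1.5)
bmd = brentq(lambda d: hill(d, top, ec50, n) - bmr, 1e-6, dose.max())
print(f"BMD = {bmd:.2f}")
```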

    Transcriptomics in Toxicogenomics, Part I: Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects

    The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing relies on extensive observation of phenotypic endpoints in vivo and in complementary in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms’ responses to environmental, chemical, and physical agents by observing molecular alterations in greater detail. Toxicogenomics (TGx) integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). The current lack of standardization in data generation and analysis hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in transcriptomic analyses, as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose, and complex endpoint selection, sample quality considerations, and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx in regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. These initial considerations on experimental design, technologies, publicly available data, and regulatory aspects are the starting point for the rigorous and reliable data preprocessing and modelling described in the second and third parts of the series.
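    The sample-randomization guideline can be made concrete with a small sketch: shuffle samples before assigning them to processing batches so that dose groups are not confounded with batch. Group sizes and labels below are illustrative.

```python
# Minimal sketch: interleave shuffled samples into processing batches.
import random

samples = [(f"s{i:02d}", dose) for i, dose in
           enumerate(["ctrl"] * 6 + ["low"] * 6 + ["high"] * 6)]

random.seed(1)
random.shuffle(samples)                      # break the dose/processing order
batches = [samples[i::3] for i in range(3)]  # interleave into 3 batches

for b, batch in enumerate(batches):
    doses = [d for _, d in batch]
    print(f"batch {b}: " + ", ".join(doses))
```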